Modeling time series has become increasingly important in a variety of applications. In general, data evolve by following diverse patterns, which are usually caused by different user behaviors. Given a time series, we define evolution genes to capture latent user behaviors and describe how those behaviors lead to the generation of the time series. In particular, we propose a uniform framework that recognizes the different evolution genes of segments by learning a classifier, and realizes the evolution genes with an adversarial generator that estimates the segments' distributions. Experimental results on a synthetic dataset and five real-world datasets show that our approach can not only achieve good prediction results (e.g., +10.56% in terms of F1), but also provide explanations of the results.
Data augmentation (DA) is indispensable in modern machine learning and deep neural networks. The basic idea of DA is to construct new training data to improve the model's generalization by adding slightly disturbed versions of existing data or synthesizing new data. In this work, we review a small but essential subset of DA -- Mix-based Data Augmentation (MixDA) that generates novel samples by mixing multiple examples. Unlike conventional DA approaches based on a single-sample operation or requiring domain knowledge, MixDA is more general in creating a broad spectrum of new data and has received increasing attention in the community. We begin by proposing a new taxonomy that classifies MixDA into Mixup-based, Cutmix-based, and hybrid approaches according to a hierarchical view of the data mix. Various MixDA techniques are then comprehensively reviewed in a more fine-grained way. Owing to its generality, MixDA has penetrated a variety of applications, which are also comprehensively reviewed in this work. We also examine why MixDA works from the aspects of improving model performance, generalization, and calibration, while explaining the model behavior based on the properties of MixDA. Finally, we recapitulate the critical findings and fundamental challenges of current MixDA studies, and outline potential directions for future work. Different from previous related works that summarize DA approaches in a specific domain (e.g., images or natural language processing) or only review a part of MixDA studies, we are the first to provide a systematic survey of MixDA in terms of its taxonomy, methodology, applications, and explainability. This work can serve as a roadmap to MixDA techniques and application reviews while providing promising directions for researchers interested in this exciting area.
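To make the Mixup-based branch of the taxonomy concrete, here is a minimal sketch of the classic Mixup operation: two (input, one-hot label) pairs are combined with a Beta-sampled weight. The function name and signature are illustrative, not from any particular library.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Classic Mixup: x~ = lam * x1 + (1 - lam) * x2, with the same
    convex combination applied to the one-hot labels.

    lam is drawn from Beta(alpha, alpha); small alpha concentrates
    lam near 0 or 1, so most mixed samples stay close to one parent.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y, lam
```

Cutmix-based variants differ only in *how* the inputs are mixed (a rectangular patch is pasted rather than a pixel-wise blend), while the label mix remains a convex combination weighted by the patch area.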
The performance of a camera network monitoring a set of targets depends crucially on the configuration of the cameras. In this paper, we investigate a reconfiguration strategy for the parameterized camera network model, with which the sensing qualities of multiple targets can be optimized globally and simultaneously. We first propose to use the number of pixels occupied by a unit-length object in the image as a metric of the sensing quality of the object, which is determined by the parameters of the camera, such as the intrinsic, extrinsic, and distortion coefficients. Then, we form a single quantity that measures the sensing quality of the targets by the camera network. This quantity further serves as the objective function of our optimization problem to obtain the optimal camera configuration. We verify the effectiveness of our approach through extensive simulations and experiments, and the results reveal its improved performance on AprilTag detection tasks. Code and related utilities for this work are open-sourced and available at https://github.com/sszxc/MultiCam-Simulation.
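A rough sketch of the pixels-per-unit-length idea: for a fronto-parallel object at depth Z seen by a pinhole camera with focal length f (in pixels), a length L spans about f * L / Z pixels. This toy function ignores distortion and viewing obliqueness, both of which the full parameterized model accounts for, and its name is hypothetical.

```python
import numpy as np

def pixels_per_unit_length(f_px, p_world, cam_pos):
    """Approximate pixel span of a unit-length, fronto-parallel object
    at p_world for a pinhole camera at cam_pos with focal length f_px.

    Uses the thin-lens approximation: span ~= f_px * L / Z with L = 1,
    where Z is the camera-to-object distance.
    """
    z = np.linalg.norm(np.asarray(p_world, float) - np.asarray(cam_pos, float))
    return f_px / z
```

Summing (or otherwise aggregating) this quantity over all target-camera pairs yields a single scalar of the kind the paper optimizes over camera configurations.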
Designing safety-critical control for robotic manipulators is challenging, especially in a cluttered environment. First, the actual trajectory of a manipulator might deviate from the planned one due to complex collision environments and non-trivial dynamics, leading to collisions. Second, the feasible space for the manipulator is hard to obtain, since the explicit distance functions between collision meshes are unknown. By analyzing the relationship between the safe set and the controlled invariant set, this paper proposes a data-driven control barrier function (CBF) construction method, which extracts a CBF from distance samples. Specifically, the constructed CBF guarantees the controlled-invariance property under the system dynamics. The data-driven method samples the distance function and determines the safe set. Then, the CBF is synthesized based on the safe set by a scenario-based sum-of-squares (SOS) program. Unlike most existing linearization-based approaches, our method preserves the volume of the feasible space for planning without approximation, which helps find a solution in a cluttered environment. The control law is obtained by solving a CBF-based quadratic program in real time, which acts as a safety filter on the desired planning-based controller. Moreover, our method guarantees safety with a proven probabilistic result. Our method is validated on a 7-DOF manipulator in both real and virtual cluttered environments. The experiments show that the manipulator is able to execute tasks where the clearance between obstacles is in millimeters.
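The "safety filter" structure of a CBF-based QP can be illustrated on a toy 1-D single integrator, where the QP admits a closed-form solution. This is a generic sketch of the CBF-QP pattern, not the paper's data-driven SOS construction; the barrier h(x) = x and the function name are assumptions for illustration.

```python
def cbf_safety_filter(x, u_des, gamma=1.0):
    """Minimal CBF-QP safety filter for the 1-D single integrator
    x' = u with barrier h(x) = x, so the safe set is {x >= 0}.

    The QP  min (u - u_des)^2  s.t.  dh/dt = u >= -gamma * h(x)
    projects the desired input onto the constraint, which here is
    simply clipping u_des from below.
    """
    return max(u_des, -gamma * x)
```

Near the boundary (small x) the constraint bites and the filter attenuates motion toward the unsafe region, while far inside the safe set the desired input passes through unchanged; with a multi-dimensional system and a sampled/SOS barrier, the same projection is done by a numerical QP solver at each control step.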
Recently, the dominant DETR-based approaches apply a central-concept spatial prior to accelerate Transformer detector convergence. These methods gradually refine the reference points toward the centers of target objects and imbue object queries with the updated central reference information for spatially conditional attention. However, centralizing reference points may severely deteriorate queries' saliency and confuse detectors due to the indiscriminative spatial prior. To bridge the gap between the reference points of salient queries and Transformer detectors, we propose SAlient Point-based DETR (SAP-DETR), which treats object detection as a transformation from salient points to instance objects. In SAP-DETR, we explicitly initialize a query-specific reference point for each object query, gradually aggregate them into an instance object, and then predict the distance from each side of the bounding box to these points. By rapidly attending to the query-specific reference region and other conditional extreme regions from the image features, SAP-DETR effectively bridges the gap between the salient point and the query-based Transformer detector with a significant convergence speedup. Our extensive experiments demonstrate that SAP-DETR achieves 1.4 times faster convergence with competitive performance. Under the standard training scheme, SAP-DETR consistently improves on the SOTA approaches by 1.0 AP. Based on ResNet-DC-101, SAP-DETR achieves 46.9 AP.
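The point-to-box parameterization described above (a salient point plus its distances to the four box sides) decodes straightforwardly into a box. A minimal sketch, with hypothetical names and the common (left, top, right, bottom) ordering assumed:

```python
def decode_box(point, dists):
    """Recover an axis-aligned box from a salient point and its
    distances (left, top, right, bottom) to the four box sides.

    Returns the box as (x_min, y_min, x_max, y_max) in image
    coordinates (y grows downward).
    """
    x, y = point
    l, t, r, b = dists
    return (x - l, y - t, x + r, y + b)
```

Because any point strictly inside the box yields positive distances to all four sides, this parameterization does not force the reference point to the object center, which is the saliency property the method exploits.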
Solving variational image segmentation problems with hidden physics is often expensive and requires different algorithms and manual tuning of model parameters. Deep learning methods based on the U-Net structure have obtained outstanding performance in many different medical image segmentation tasks, but designing such networks requires many parameters and much training data, which are not always available for practical problems. In this paper, inspired by the traditional multiphase convex Mumford-Shah variational model and the full approximation scheme (FAS) for solving nonlinear systems, we propose a novel variational-model-informed network (denoted FAS-Unet) that exploits model and algorithm priors to extract multi-scale features. The proposed model-informed network integrates image data and mathematical models, and implements them by learning a few convolution kernels. Based on the variational theory and the FAS algorithm, we first design a feature extraction sub-network (FAS-Solution module) to solve the model-driven nonlinear systems, in which skip-connections are employed to fuse the multi-scale features. Second, we design a convolution block to fuse the features extracted in the previous stage, producing the final segmentation. Experimental results on three different medical image segmentation tasks show that the proposed FAS-Unet is highly competitive with other state-of-the-art methods in qualitative, quantitative, and model-complexity evaluations. Moreover, it may also be possible to train specialized network architectures that automatically satisfy some of the mathematical and physical laws in other image problems, for better accuracy, faster training, and improved generalization. The code is available at \url{https://github.com/zhuhui100/FASUNet}.
Learning powerful representations in bird's-eye view (BEV) for perception tasks is a trend attracting extensive attention from both industry and academia. Conventional approaches in most autonomous-driving algorithms perform detection, segmentation, tracking, etc., in the front or perspective view. As sensor configurations become increasingly complex, integrating multi-source information from different sensors and representing features in a unified view is of vital importance. BEV perception inherits several advantages, since representing the surrounding scene in BEV is intuitive and fusion-friendly, and representing objects in BEV is most desirable for subsequent modules such as planning and/or control. The core problems of BEV perception lie in (a) how to reconstruct the lost 3D information via the view transformation from perspective view to BEV; (b) how to acquire ground-truth annotations in the BEV grid; (c) how to formulate the pipeline to incorporate features from different sources and views; and (d) how to adapt and generalize algorithms as sensor configurations vary across different scenarios. In this survey, we review recent work on BEV perception and provide an in-depth analysis of different solutions. In addition, several system designs of BEV approaches in industry are described. Furthermore, we introduce a full suite of practical guidebooks to improve the performance of BEV perception tasks, covering camera, LiDAR, and fusion inputs. Finally, we point out future research directions in this field. We hope this report can shed light on the community and encourage more research on BEV perception. We maintain an active repository to collect the latest work and provide a toolbox with a bag of tricks at https://github.com/openperceptionx/bevperception-survey-recipe.
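Problem (a), the perspective-to-BEV view transformation, can be illustrated by the simplest geometric approach: inverse perspective mapping under a flat-ground assumption. The sketch below is a hypothetical minimal version (no extrinsics, camera frame with y pointing down), not any method surveyed here.

```python
import numpy as np

def pixel_to_bev(u, v, K, cam_height):
    """Map an image pixel to a ground-plane (BEV) point by
    back-projecting the pixel ray through K^-1 and intersecting it
    with the ground plane y = cam_height (camera frame, y down).

    Returns (lateral offset, forward distance) or None if the ray
    never hits the ground ahead of the camera.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    if ray[1] <= 0:  # ray points at or above the horizon
        return None
    scale = cam_height / ray[1]
    p = ray * scale
    return p[0], p[2]
```

This geometric baseline loses all 3D information for objects off the ground plane, which is precisely why learned view-transformation modules (depth-based lifting, cross-attention, etc.) dominate modern BEV perception.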
In recent years, deep dictionary learning (DDL) has attracted much attention due to its effectiveness for representation learning and visual recognition. To fully exploit the category information of different samples, we propose a novel deep dictionary learning model with an intra-class constraint (DDLIC) for visual classification. Specifically, we design intra-class compactness constraints on the intermediate representations at different levels to encourage intra-class representations to be closer to each other, so that the finally learned representations become more discriminative. In the classification stage, our DDLIC performs a hierarchical greedy optimization in a manner similar to the training stage. Experimental results on four image datasets show that our method is superior to state-of-the-art methods.
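A generic stand-in for the intra-class compactness idea is the mean squared distance of each representation to its class centroid: the smaller the value, the tighter the classes. This toy metric and its name are illustrative, not the paper's exact constraint.

```python
import numpy as np

def intra_class_compactness(feats, labels):
    """Mean squared distance of each feature vector to its class
    centroid, averaged over all samples; smaller values mean more
    compact (and typically more discriminative) classes.
    """
    feats = np.asarray(feats, float)
    labels = np.asarray(labels)
    total = 0.0
    for c in np.unique(labels):
        class_feats = feats[labels == c]
        total += ((class_feats - class_feats.mean(axis=0)) ** 2).sum()
    return total / len(labels)
```

Applied as a penalty on the intermediate representations at each level of a deep dictionary model, minimizing such a term pulls same-class representations together, which is the effect the DDLIC constraints are designed to achieve.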
As a strategy to reduce travel delays and improve energy efficiency, platooning of connected and autonomous vehicles (CAVs) at unsignalized intersections has become increasingly popular in academia. However, few studies have attempted to model the relationship between the optimal platoon size and the traffic conditions around the intersection. To this end, this study proposes an autonomous platoon-based intersection control model powered by deep reinforcement learning (DRL) techniques. The model framework has the following two levels: the first level adopts a First-Come-First-Serve (FCFS) reservation-based policy, integrated with a non-conflicting lane-selection mechanism, to determine vehicles' passing priority; the second level applies a Deep Q-Network algorithm to identify the optimal platoon size based on the real-time traffic conditions at the intersection. When tested in a traffic micro-simulator, our proposed model demonstrates superior performance in travel efficiency and fuel conservation compared with state-of-the-art methods.
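The FCFS reservation policy at the first level reduces, at its core, to serving reservation requests in arrival order. A minimal sketch with hypothetical data shapes (the conflict-checking and lane-selection logic of the actual model is omitted):

```python
def fcfs_priority(requests):
    """First-Come-First-Serve passing order: sort reservation requests,
    given as (vehicle_id, arrival_time) pairs, by arrival time.

    Ties are broken by vehicle id so the ordering is deterministic.
    """
    return [vid for vid, t in sorted(requests, key=lambda r: (r[1], r[0]))]
```

The DRL component then sits on top of such an ordering: given the real-time queue state, the Deep Q-Network picks a platoon size, and the FCFS layer grants intersection reservations platoon by platoon.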
Weakly supervised action localization aims to localize and classify actions in a given video using only video-level classification labels. The key issue of existing weakly supervised action localization methods is therefore the limited supervision that weak annotations provide for precise prediction. In this work, we propose hierarchical mining strategies at the video level and the snippet level, i.e., hierarchical supervision and hierarchical consistency mining, to maximally utilize the given annotations and prediction consistency. To this end, a Hierarchical Mining Network (HiM-Net) is proposed. Concretely, it mines hierarchical supervision for classification in two grains: one is the video-level existence of ground-truth categories, captured by multiple instance learning; the other is the snippet-level non-existence of each negatively-labeled category from the perspective of complementary labels, which is optimized by our proposed complementary label learning. As for hierarchical consistency, HiM-Net explores video-level co-action feature similarity and snippet-level foreground-background opposition for discriminative representation learning and consistent foreground-background separation. Specifically, prediction variance is viewed as uncertainty to select high-consensus pairs for the proposed foreground-background collaborative learning. Comprehensive experimental results show that HiM-Net outperforms existing methods on the THUMOS14 and ActivityNet1.3 datasets by large margins, through hierarchically mining the supervision and consistency. The code will be available on GitHub.
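The complementary-label idea (supervising on categories known to be *absent*) can be sketched as a loss that penalizes the predicted probability of each negatively-labeled class via -log(1 - p_k). This is a generic formulation of complementary-label learning, not necessarily HiM-Net's exact loss; names are illustrative.

```python
import numpy as np

def complementary_label_loss(probs, neg_labels):
    """Complementary-label loss sketch: for each category index in
    neg_labels (known to be absent from the snippet), accumulate
    -log(1 - p_k), which pushes those predicted probabilities to 0.
    """
    p = np.clip(np.asarray(probs, float), 1e-7, 1.0 - 1e-7)
    return float(-np.log(1.0 - p[list(neg_labels)]).sum())
```

The appeal in the weakly supervised setting is that a video-level label list implies, for every snippet, that all unlisted categories are absent, turning weak video-level annotation into dense snippet-level negative supervision.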